Bridging the Gap between Stochastic Gradient MCMC and Stochastic Optimization: Supplementary Material
Authors
Abstract
A Solutions for the sub-SDEs

We provide analytic solutions for the split sub-SDEs in Section 4.1. For stepsize $h$, the solutions are given in (6):

$$A:\quad \theta_t = \theta_{t-1} + G_1(\theta)\, p\, h, \qquad p_t = p_{t-1}$$
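As a concrete reading of the A-step, the following Python sketch applies the update above for a diagonal preconditioner; the function and variable names (sub_sde_A_step, g1) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sub_sde_A_step(theta, p, g1, h):
    """One analytic A-step of the splitting scheme:
    theta_t = theta_{t-1} + G1(theta) p h, with the momentum
    passing through unchanged, p_t = p_{t-1}.
    Here g1 is assumed to be the diagonal of G1(theta) as a vector."""
    theta_new = theta + g1 * p * h  # position advanced by preconditioned momentum
    return theta_new, p             # momentum unchanged in this sub-SDE
```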
Similar resources
Bridging the Gap between Stochastic Gradient MCMC and Stochastic Optimization
Stochastic gradient Markov chain Monte Carlo (SG-MCMC) methods are Bayesian analogs of popular stochastic optimization methods; however, this connection is not well studied. We explore this relationship by applying simulated annealing to an SG-MCMC algorithm. Furthermore, we extend recent SG-MCMC methods with two key components: i) adaptive preconditioners (as in ADAgrad or RMSprop), and ii) ada... (an illustrative sketch of these components appears after this list)
Supplementary Material for: On the Convergence of Stochastic Gradient MCMC Algorithms with High-Order Integrators
Numerical Solution of Optimal Heating of Temperature Field in Uncertain Environment Modelled by the use of Boundary Control
In the present paper, optimal heating of a temperature field, modelled as a boundary optimal control problem, is investigated in uncertain environments and then solved numerically. In the physical modelling, a partial differential equation with a stochastic input and a stochastic parameter is applied as the constraint of the optimal control problem. Controls are implemented ...
SGD with Variance Reduction beyond Empirical Risk Minimization
We introduce a doubly stochastic proximal gradient algorithm for optimizing a finite average of smooth convex functions, whose gradients depend on numerically expensive expectations. Our main motivation is the acceleration of the optimization of the regularized Cox partial-likelihood (the core model used in survival analysis), but our algorithm can be used in different settings as well. The pro...
Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap
In this paper, we study the problem of constrained and stochastic continuous submodular maximization. Even though the objective function is not concave (nor convex) and is defined in terms of an expectation, we develop a variant of the conditional gradient method, called Stochastic Continuous Greedy, which achieves a tight approximation guarantee. More precisely, for a monotone and continuous D...
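To make concrete the two components named in the first abstract above, here is a minimal Python sketch of a preconditioned, annealed SGLD-style update: an RMSprop-style diagonal preconditioner combined with a temperature annealed toward zero, so the sampler gradually behaves like a stochastic optimizer. The 1/t schedule and all names are assumptions for illustration; this is not the paper's exact algorithm.

```python
import numpy as np

def annealed_preconditioned_sgld(grad_fn, theta, n_steps, h=1e-3,
                                 alpha=0.99, eps=1e-8):
    """Sketch of SGLD with an RMSprop-style preconditioner and simulated
    annealing on the injected noise (illustrative, not the paper's method)."""
    v = np.zeros_like(theta)                   # running second-moment estimate
    for t in range(1, n_steps + 1):
        g = grad_fn(theta)                     # stochastic gradient of the negative log-posterior
        v = alpha * v + (1 - alpha) * g * g    # RMSprop accumulator
        m = 1.0 / (np.sqrt(v) + eps)           # diagonal preconditioner
        temp = 1.0 / t                         # annealed temperature (assumed 1/t schedule)
        noise = np.sqrt(2.0 * h * m * temp) * np.random.randn(*theta.shape)
        theta = theta - h * m * g + noise      # preconditioned SGLD step
    return theta
```

As the temperature goes to zero the injected noise vanishes and the update reduces to preconditioned SGD, which is the sampling-to-optimization bridge the abstract describes.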
Journal title:
Volume / Issue:
Pages: -
Publication date: 2016